Avoiding Non-Integrable Beliefs in Expectation Propagation
Zhao, Zilu, Chen, Jichao, Slock, Dirk
Expectation Propagation (EP) is a widely used iterative message-passing algorithm that decomposes a global inference problem into multiple local ones. It approximates marginal distributions as ``beliefs'' using intermediate functions called ``messages''. It has been shown that the stationary points of EP coincide with those of a corresponding constrained Bethe Free Energy (BFE) optimization problem; EP is therefore an iterative method for optimizing the constrained BFE. However, the iterates may fall outside the feasible set of the BFE optimization problem, i.e., the beliefs may not be integrable. Much of the literature employs various devices to keep all messages integrable, yet in most Bayesian estimation problems, restricting the messages to be integrable shrinks the actual feasible set. Furthermore, in extreme cases where the factors themselves are not integrable, making each message integrable is not enough to guarantee integrable beliefs. In this paper, two EP frameworks are proposed that ensure EP has integrable beliefs; both methods allow non-integrable messages. We then apply the proposed methods to the signal recovery problem in the Generalized Linear Model (GLM).
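The distinction the abstract draws between non-integrable messages and integrable beliefs can be illustrated with a minimal scalar sketch (this is an illustration only, not the frameworks proposed in the paper). Gaussian EP messages are conveniently represented in natural parameters (precision, precision times mean); an individual message produced by cavity division or moment matching may carry negative precision, so it is not integrable on its own, yet the belief formed by the product of all messages is integrable whenever the total precision is positive. All names below (`belief`, the example message values) are hypothetical.

```python
# Minimal scalar illustration: Gaussian messages in natural parameters
# (precision, precision * mean). A single message with negative
# precision is non-integrable as a standalone density, but the belief
# (product of messages) is integrable iff the summed precision > 0.

def belief(messages):
    """Combine Gaussian messages given as (precision, precision * mean) pairs."""
    prec = sum(p for p, _ in messages)
    pm = sum(h for _, h in messages)
    if prec <= 0:
        # This is the failure mode the paper addresses: the iterate has
        # left the feasible set and the belief is not a valid density.
        raise ValueError("belief is non-integrable (total precision <= 0)")
    return pm / prec, 1.0 / prec  # posterior mean, posterior variance

# A prior-like message and a likelihood message with negative precision,
# as can arise from EP cavity division / moment matching.
msgs = [(2.0, 1.0), (-0.5, 0.3)]
mean, var = belief(msgs)  # total precision 1.5 > 0, so the belief is integrable
```

The point of the sketch is that forbidding the second message (as integrability-enforcing schemes would) discards iterates that still yield a perfectly valid belief.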
Theorem 2. Suppose that the predictive distribution Q has sufficient capacity to approximate the true unknown distribution P, and that the data are i.i.d. Then Lm(P, Q) = 0 if and only if P = Q, when F is a unit ball in a universal RKHS [13]. This holds because the confidence level p2 - p1 is exactly equal to the proportion of samples {y1, ..., yn} covered by the two-sided prediction interval.

B.1 Baselines

MC-Dropout (MCD) [12]: a variant of standard dropout, known as Monte-Carlo Dropout.

Heteroscedastic Neural Network (HNN) [17]: in this approach, similar to heteroscedastic regression, the network has two outputs in the last layer, corresponding to the predicted mean and variance for each input xi.
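The HNN baseline above is typically trained by minimizing the Gaussian negative log-likelihood of the two head outputs. The following is a minimal sketch under the common parameterization where the second output is the log-variance (for numerical stability); the function name and example values are illustrative, not taken from the paper.

```python
import numpy as np

# Heteroscedastic Gaussian NLL: the network predicts, per input x_i,
# a mean mu(x_i) and a log-variance s(x_i) = log sigma^2(x_i).
# Minimizing this loss fits both outputs jointly.

def gaussian_nll(y, mu, log_var):
    """Per-sample Gaussian negative log-likelihood (up to an additive constant)."""
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var))

# Toy targets and head outputs (log_var = 0 means sigma^2 = 1).
y = np.array([1.0, 2.0, 3.0])
mu = np.array([1.1, 1.8, 3.2])
log_var = np.zeros(3)
loss = gaussian_nll(y, mu, log_var).mean()
```

Predicting the log-variance rather than the variance keeps sigma^2 strictly positive without a constrained optimizer.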